Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Maddimsetty Bullaiaha Tej, Vamsi Krishna Bellam
DOI Link: https://doi.org/10.22214/ijraset.2021.39711
"People lost", "people missing": these are words we come across wherever there is a mass gathering event or a crowded area. Traditional approaches such as public announcements are used to address this issue. Another idea is to identify the person using face recognition and pattern matching techniques. There are several ways to implement face recognition, such as extracting facial features from the positions of the eyes, nose and jawbone, or through skin texture analysis. Using these techniques, a unique feature set can be created for each person. Here, a photograph of the missing person is used to extract these facial features. Once the feature set of that individual is available, pattern matching techniques can be used to find a person with the same facial features in crowd images or videos.
I. INTRODUCTION
Humans do not stay in one place. We move around while carrying out our daily tasks; for some, these tasks are part of their job, while others go out simply to have fun. Either way, some places such as temples, airports and malls become heavily populated as a result of people's daily activities. In such areas, there is a chance of people getting separated from their group or family, and it can take considerable time to reunite with them in the crowd.
A traditional way of reuniting lost people with their families in heavily populated areas is to make announcements over a centrally operated speaker system, stating that the lost person will wait at a particular place so that the others can come there to reunite with them.
A. Motivation
The main motivation of our project is to reduce the time and effort needed to reunite people who get separated from their families in crowded places, by automating the search for a missing person using face recognition on surveillance footage.
B. Problem Statement
Nowadays, with the increase in population all over the world, missing person cases are rising day by day. Although a few services exist for finding lost persons, they are not very effective, and a large number of missing person cases remain pending in police stations. Places such as temples, malls and pilgrimage sites are areas in India where there is a higher chance of people getting separated from their families because of heavy crowds.
C. Scope
In this project, we identify the location of a missing person by processing surveillance footage.
D. Objective
The objective of this paper is to identify a missing person in crowd images or videos by extracting facial features from the person's photograph and matching them against feature sets extracted from surveillance footage, and to report the person's location based on the camera that recorded the match.
II. SOFTWARE REQUIREMENTS ANALYSIS
The following software is used in our project.
A. MEAN Stack
The MEAN stack is a JavaScript-based framework for developing web applications. MEAN is named after MongoDB, Express, Angular and Node, the four key technologies that make up the layers of the stack.
There are variations of the MEAN stack, such as MERN (replacing Angular.js with React.js) and MEVN (using Vue.js). The MEAN stack is one of the most popular technology stacks for building web applications.
B. NPM
When developing any application, you will need many third-party libraries, and a package manager is needed to use them. npm, the package manager for JavaScript, is installed along with Node.js. It allows users to consume and distribute JavaScript modules (packages); packages extend the functionality of your application and promote reusability.
C. Ionic Framework
Ionic is a complete open-source SDK for hybrid mobile app development, built on top of AngularJS and Apache Cordova. Ionic provides tools and services for developing hybrid mobile apps using web technologies such as CSS, HTML5 and Sass.
D. Python
Python is a general-purpose interpreted, interactive, object-oriented, high-level programming language. Python provides libraries for many computer vision operations such as face detection and face recognition. OpenCV-Python is a library of Python bindings designed to solve computer vision problems. OpenCV supports Python versions 2.7, 3.4, 3.5 and 3.6, and any one of these versions is fine to work with.
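As an illustration, a minimal face detection sketch using OpenCV-Python and its bundled Haar cascade is shown below; the image file name is only a placeholder and the exact code used in the project may differ.

```python
# Minimal face detection sketch with OpenCV-Python.
# "crowd.jpg" is a placeholder image path, not a file from this project.
import cv2

# Load the pre-trained frontal face Haar cascade shipped with OpenCV
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("crowd.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns a list of (x, y, w, h) face rectangles
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Mark each detected face and save the annotated image
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", image)
print(len(faces), "face(s) detected")
```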
III. HARDWARE REQUIREMENTS ANALYSIS
The following hardware is required to gather data, process the information and display the result.
A. Surveillance Camera
Surveillance cameras are video cameras used for the purpose of observing an area. They are often connected to a recording device or IP network, and may be watched by a security guard or law enforcement officer. Cameras and recording equipment used to be relatively expensive and required human personnel to monitor camera footage, but analysis of footage has been made easier by automated software that organizes digital video footage into a searchable database, and by video analysis software (such as VIRAT and Human ID). The amount of footage is also drastically reduced by motion sensors which only record when motion is detected. With cheaper production techniques, surveillance cameras are simple and inexpensive enough to be used in home security systems, and for everyday surveillance.
B. Smartphone
Smartphones are a class of multi-purpose mobile computing devices. They are distinguished from feature phones by their stronger hardware capabilities and extensive mobile operating systems, which facilitate wider software, internet (including web browsing over mobile broadband) and multimedia functionality (including music, video, cameras and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically include various sensors that can be leveraged by their software, such as a magnetometer, proximity sensor, barometer, gyroscope and accelerometer, and support wireless communication protocols such as Bluetooth, Wi-Fi and satellite navigation.
IV. PROPOSED METHODOLOGY
Images of the missing person are received from their family via a mobile application. From these images, the facial features of the person are extracted using feature set extraction algorithms, and the resulting feature set is stored in a database. Video or image data of the crowd is then taken from the surveillance cameras, and faces are extracted from it. A feature set is computed for each of these faces using the same feature set extraction algorithms. A relative match is then sought between the feature set stored in the database and the feature sets extracted from the footage, which identifies the person. Finally, from the camera whose footage recorded the matched face, the location of the person can also be identified.
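A condensed sketch of this pipeline is given below. It assumes the open-source face_recognition library for feature set extraction and matching; the feature set extraction algorithm actually used in the project may differ, and the file names are placeholders.

```python
# Pipeline sketch: build the missing person's feature set, then scan a crowd frame.
# Assumes the open-source face_recognition library; file names are placeholders.
import face_recognition

# 1. Feature set of the missing person, built from the complaint photograph
reference_image = face_recognition.load_image_file("missing_person.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

# 2. Faces and feature sets extracted from one surveillance frame
frame = face_recognition.load_image_file("frame_0001.jpg")
face_locations = face_recognition.face_locations(frame)
frame_encodings = face_recognition.face_encodings(frame, face_locations)

# 3. Relative match between the stored feature set and each face in the frame
for location, encoding in zip(face_locations, frame_encodings):
    match = face_recognition.compare_faces([reference_encoding], encoding, tolerance=0.6)[0]
    if match:
        print("Possible match at", location)  # (top, right, bottom, left) in the frame
```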
A. Mobile Phone
The mobile phone is the starting point of the process from the user's point of view. Through the mobile application developed by our team, the user first needs to create an account in our database. For registration, the user provides credentials such as phone number, email ID and user name. After registration, the user can register a complaint on the register complaint page by providing details such as the missing person's photographs and name. Once the necessary details are filled in, the mobile application sends them to the server.
B. Surveillance Camera
Surveillance cameras are the eyes of the whole system, as they are responsible for collecting the surveillance footage and sending it to the server for processing. A surveillance camera records video of the crowd and sends it to the server in a specific format such as MP4, 3GP, AVI or MKV.
C. Server
The server is the heart of the whole system, as it is responsible for gathering complaints from users, collecting surveillance data, maintaining the database, and processing the data with specific algorithms. On receiving the images from the mobile application, the server generates a feature set of the lost person from the images in the complaint and stores it in its database. Later, feature sets of the people in the crowd are extracted from the surveillance footage, and the two sets are compared to find any match for the missing person. The final status is then sent to the mobile application; it may contain the location details of the lost person if found, or a negative reply such as "person not found".
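Assuming each feature set is stored as a numeric vector, the server-side comparison can be reduced to a simple distance check, as in the sketch below; the threshold value is an assumed example, not a parameter of this project.

```python
# Server-side matching sketch: compare the stored feature vector of the lost person
# against feature vectors extracted from surveillance footage.
# The threshold of 0.6 is an assumed example value.
import numpy as np

def find_matches(stored_features, crowd_features, threshold=0.6):
    """Return indices of crowd feature sets that are close to the stored one."""
    matches = []
    for index, candidate in enumerate(crowd_features):
        distance = np.linalg.norm(np.asarray(stored_features) - np.asarray(candidate))
        if distance <= threshold:
            matches.append(index)
    return matches

# e.g. find_matches(db_feature_set, footage_feature_sets) gives the matching faces,
# whose camera of origin then gives the person's approximate location.
```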
D. Algorithms
Algorithms are the brain of the system, as they determine how the data is processed and how the output is generated. The server runs these algorithms, which include face detection algorithms, facial feature extraction algorithms, video-to-frame conversion algorithms, noise removal algorithms and feature set matching algorithms. In this project, we used a naïve frame conversion algorithm to convert a video into frames.
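A naïve frame conversion of this kind can be written with OpenCV's VideoCapture, as sketched below; the video file name and output folder are placeholders.

```python
# Naive video-to-frame conversion sketch using OpenCV.
# "surveillance.mp4" and the output folder name are placeholders.
import os
import cv2

def video_to_frames(video_path, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    count = 0
    while True:
        success, frame = capture.read()
        if not success:  # no more frames in the video
            break
        cv2.imwrite(os.path.join(output_dir, "frame_%05d.jpg" % count), frame)
        count += 1
    capture.release()
    return count

# e.g. video_to_frames("surveillance.mp4", "frames") returns the number of frames written
```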
V. CODING
Algorithm for mobile application working:
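At a high level, the mobile application flow described in Section IV can be outlined as follows:
1. The user registers an account by providing credentials such as phone number, email ID and user name.
2. The user logs in with the registered username and password.
3. On the register complaint page, the user enters the missing person's details (name, photographs, address, etc.).
4. The application sends the complaint details to the server.
5. The server's final status, either the location details of the person if found or a "person not found" reply, is displayed to the user.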
VI. TESTING
Testing is the process of detecting errors; it involves executing a program with the explicit intention of finding errors.
Checking the working of preview

TEST_ID: id 1 | TEST DESIGNED BY: M.B. Tej
MODULE_NAME: Mobile Application | TEST DESIGNED DATE: 07-11-2018
TEST_TITLE: Testing preview | TEST EXECUTION DATE: 07-11-2018
PRE CONDITIONS: Android application needs to be installed on an Android mobile
TEST DESCRIPTION: To test the preview
TEST TYPE: Black box testing

STAGE | TEST STAGE | TEST DATA | EXPECTED RESULT | ACTUAL RESULT | STATUS | REMARKS
1 | No preview obtained | --- | Mobile phone not compatible | Mobile phone not compatible | Pass | Error message is displayed
2 | Obtaining preview | --- | App is opened | App is opened | Pass | App layout is displayed

Checking the working of login page

TEST_ID: id 2 | TEST DESIGNED BY: Vamsi
MODULE_NAME: Mobile Application | TEST DESIGNED DATE: 08-11-2018
TEST_TITLE: Testing login page | TEST EXECUTION DATE: 08-11-2018
PRE CONDITIONS: Username and password need to be given as input
TEST DESCRIPTION: To test the login page
TEST TYPE: Black box testing

STAGE | TEST STAGE | TEST DATA | EXPECTED RESULT | ACTUAL RESULT | STATUS | REMARKS
1 | Not yet logged in to the account | Invalid username and password | Error message should be displayed | Error message is displayed | Pass | Error message is displayed
2 | User's own page is opened | Valid username and password | Login must be successful | Logged in successfully | Pass | User's profile page is displayed
Table 6.1: Testing Mobile application environment and login page
Checking the working of complaint registration

TEST_ID: id 3 | TEST DESIGNED BY: Vamsi
MODULE_NAME: Mobile Application | TEST DESIGNED DATE: 09-11-2018
TEST_TITLE: Testing complaint registration | TEST EXECUTION DATE: 09-11-2018
PRE CONDITIONS: Android application needs to be installed on an Android mobile
TEST DESCRIPTION: To test the complaint registration page
TEST TYPE: Black box testing

STAGE | TEST STAGE | TEST DATA | EXPECTED RESULT | ACTUAL RESULT | STATUS | REMARKS
1 | No mails are received | Invalid details | Failure in registering the complaint | Complaint registration failed | Pass | Error message is displayed
2 | Mails describing the complaint are received | Details such as name, images and address | Complaint registered successfully | Complaint registration done successfully | Pass | Mails were received describing the registered complaint

Checking the working of face detection

TEST_ID: id 4 | TEST DESIGNED BY: M.B. Tej
MODULE_NAME: Face Detection | TEST DESIGNED DATE: 10-11-2018
TEST_TITLE: Testing face detection | TEST EXECUTION DATE: 10-11-2018
PRE CONDITIONS: A system capable of running Python programs
TEST DESCRIPTION: To test face detection
TEST TYPE: Black box testing

STAGE | TEST STAGE | TEST DATA | EXPECTED RESULT | ACTUAL RESULT | STATUS | REMARKS
1 | No faces detected | An image with no faces | No faces detected | No faces were detected | Pass | No images of faces were generated
2 | Faces are detected | Image with a few people's faces | Faces detected in the given image | Faces of the people in the image were detected | Pass | Images of the detected faces were generated
Table 6.2: Testing Mobile Application Register Complaint page and Face detection program.
VII. RESULTS
A. Result of Video to Frame Conversion Module
This is the result of the video-to-frame conversion code. A 40-second video was used to test it, and it generated 1197 frames (roughly 30 frames per second).
B. Result of Face Detection Module
This is the output of the face detection code. A sample image is given as input, and the image shown below is generated as output, along with the cropped faces collected into a folder.
Thus, the mobile application module, the video-to-frame conversion module and face detection from images have been completed successfully. The mobile application is used to register a complaint, which is sent to the server; the server takes the surveillance video, generates frames from it and performs face detection on those images. The output of the system depends on the images sent while registering the complaint and on the quality of the video sent by the surveillance camera. Video processing operations such as filtering and masking are yet to be done, and the face recognition module also still needs to be implemented.
Copyright © 2022 Maddimsetty Bullaiaha Tej, Vamsi Krishna Bellam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET39711
Publish Date : 2021-12-30
ISSN : 2321-9653
Publisher Name : IJRASET